Function Approximation by Three-Layered Networks and Its Error Bounds | An Integral Representation Theorem
Abstract
Neural networks are widely known to provide a method of nonlinear function approximation. In order to clarify their approximation ability, a new theorem on an integral transform of ridge functions is presented. Using this theorem, an approximation bound can be obtained which makes explicit the quantitative relationship between the approximation accuracy and the number of elements in the hidden layer. This result shows that the approximation accuracy depends on the smoothness of the target function. It also shows that approximation methods based on ridge functions are free from the "curse of dimensionality".
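As an illustration of the type of bound discussed (a generic Barron-type sketch, not the paper's exact statement), for a target function f that admits a suitable integral representation over ridge functions with a finite constant C_f, a three-layered network f_n with n hidden elements can achieve

    \| f - f_n \|_{L^2(\mu)}^2 \le \frac{C_f^2}{n},

where C_f reflects the smoothness of f but not the input dimension, so the rate 1/n itself is dimension-free.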
Similar Resources
An Integral Representation of Functions Using Three-layered Networks and Their Approximation Bounds
Neural networks are widely known to provide a method of approximating nonlinear functions. In order to clarify its approximation ability, a new theorem on an integral transform of ridge functions is presented. By using this theorem, an approximation bound, which evaluates the quantitative relationship between the approximation accuracy and the number of elements in the hidden layer, can be obta...
Error Bounds for Approximation with Neural Networks
In this paper we prove convergence rates for the problem of approximating functions f by neural networks and similar constructions. We show that the smoother the activation functions are, the better the rates, provided that f satisfies an integral representation. We give error bounds not only in Hilbert spaces but in general Sobolev spaces W^{m,r}(Ω). Finally, we apply our results to a class o...
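For reference, the norm of the Sobolev space W^{m,r}(Ω) mentioned above is the standard one (a textbook definition, not something specific to this paper):

    \| f \|_{W^{m,r}(\Omega)} = \Big( \sum_{|\alpha| \le m} \| D^{\alpha} f \|_{L^{r}(\Omega)}^{r} \Big)^{1/r},

so bounds in W^{m,r}(Ω) control the error of f and of its weak derivatives up to order m in L^r.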
Mean value theorem for integrals and its application on numerically solving of Fredholm integral equation of second kind with Toeplitz plus Hankel Kernel
The subject of this paper is the solution of the Fredholm integral equation with Toeplitz, Hankel, and Toeplitz plus Hankel kernels. The mean value theorem for integrals is applied and then extended to solve high-dimensional problems, and finally some examples and graphs of the error function are presented to show the ability and simplicity of the method.
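As a reminder of the setting (standard notation assumed here, not quoted from the paper), a Fredholm integral equation of the second kind has the form

    u(x) = g(x) + \lambda \int_{a}^{b} K(x,t)\, u(t)\, dt,

where a Toeplitz kernel depends only on the difference of the arguments, K(x,t) = k_1(x - t), a Hankel kernel only on their sum, K(x,t) = k_2(x + t), and a Toeplitz plus Hankel kernel combines the two, K(x,t) = k_1(x - t) + k_2(x + t).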
Geoid Determination Based on Log Sigmoid Function of Artificial Neural Networks: (A Case Study: Iran)
A Back Propagation Artificial Neural Network (BPANN) is a well-known learning algorithm predicated on a gradient descent method that minimizes the squared error between the network output and the target output values. In this study, 261 GPS/Leveling and 8869 gravity intensity values of Iran were selected, then the geoid with three methods "ellipsoidal Stokes integral", "BPANN", and "collocation" ...
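In symbols (a generic sketch of the squared-error gradient descent described above, not the exact configuration used in the study), the network weights w are updated as

    E(w) = \tfrac{1}{2} \sum_{i} \| y_i - \hat{y}_i(w) \|^2, \qquad w \leftarrow w - \eta \, \nabla_{w} E(w),

where \hat{y}_i(w) is the network output for the i-th input, y_i the corresponding target, and \eta the learning rate.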
Efficient Simulation of Stable Random Fields and Its Applications
Two methods to approximate stable random fields are presented. The methods are based on approximating the kernel function in the integral representation of such fields. Error bounds for the approximation error are derived and the approximations are used to simulate stable random fields. The simulation methodology is applied to a portfolio of storm insurance policies in order to spatially predic...
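For context (the standard spectral form of such fields, assumed here rather than taken from the paper), a stable random field typically admits an integral representation

    X(t) = \int_{E} f(t, s)\, M(ds),

where M is an α-stable random measure and f is the kernel; the approximation methods mentioned above replace f by a simpler (for example piecewise-constant) kernel, for which the approximation error can be bounded.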
Publication date: 1994